Run Your Own Server Podcast Episode 08: Hardware

Publication date 2006-08-27
Language English
RYOS, Episode 8 - Hardware

Thud: The "Run Your Own Server" podcast for August 25th, 2006.

[theme music]
Thud: In this episode: Hardware, CPUs, RAM, motherboards, hard-drives, RAID, network cards, and a moment of sac.

[theme music continues]
Thud: This episode's reverse sponsor is Soekris. Soekris Engineering, Inc. is a small company specializing in the design of embedded computer and communication devices. If you want a small server with low power requirements, this is the place to go. Check them out at

[theme music continues]
Thud: OK, to start off this episode we want to take care of some housekeeping. The first thing is, we want to apologize for not being regular with the release of our episodes. It's been nearly a month since we recorded the last one, and personally I have to say that I'm really sorry we can't do this more often. We really have only gotten started and we've already got quite a few subscribers. I just want to apologize to everybody. I hope you guys stay subscribed and keep listening.

But then on the flip side of that, you have to understand that we're real sys-admins, so from time to time we have bad weeks, and in this case a bad month, where there's just so many fires that we have to fight, and so many customers that we have to try to make happy, that by the time we have a free moment we generally doze off instead of recording a podcast.

Gek: It's not something that we will make a habit of. We do want to try to get the show out once a week, but it's really hard when there are two of us. I might be putting out fires one week when he's free to record, and, you know, it's just too hard for both of us to make sure we have time. But this past month was crazy, so hopefully it won't be as bad in the future.

Thud: All right, and the next little bit of housekeeping we want to do is take care of our first audio question. Our first real audio question is from Binary Alcove. He's currently living on the Internet, like a bunch of people, so here's his question.

Binary Alcove: Hey guys, I have a couple questions for you. The first one is about backups. Right now I have a Fedora Core 4 box, and I'm running a cron job that copies one disk to the other, and it's working out OK, but I'm pretty sure you know something better. My second question is about desktop management. I need the rundown on that one, you know: a good server, an operating system for it, and what kind of technologies to do it with. Any help you give would be great. You know, I love your show and all that good stuff, so keep them coming. Thanks, bye.

Thud: OK, so the first part of his question was about backups. Oddly enough, that's what we're planning for our next episode so we'll get into some extreme detail with that, but just generally, Gek, give us some general pointers for backups.

Gek: Backups are actually what I do where I work; it's one of my main tasks. If you can't afford backup software like NetBackup, like most people can't, the best thing that I use is rsync or tar to do some sort of script magic: I copy a full backup and then do incremental tars, or, for the stuff that I rsync, I just rsync everything to one disk and then do tars of the parts of that disk that I care about.

We will go into backups in a lot more detail in the next episode, but to answer your question real quick: instead of just copying files, rsync is really nice. There are a lot of switches you can use to synchronize two disks, and you can even exclude certain subdirectories, which is something I do. There are a lot of pictures that I've collected from clipart sites on the web, and I don't necessarily need to back those up; I can just exclude them from my rsync. I don't care if I lose them. And tar is the other one. Tar is tried and true, and it works great. You can compress things, you can do incrementals, you can do differentials, so it's great for doing backups.
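A minimal sketch of the tar-with-exclusions approach Gek describes, using Python's standard tarfile module instead of a shell script. The paths and the excluded "clipart" directory are just illustrative, not a real layout:

```python
import tarfile
from pathlib import Path

def backup(source: str, archive: str, exclude: tuple = ()) -> None:
    """Tar and gzip `source` into `archive`, skipping any excluded subtrees."""
    def skip(member: tarfile.TarInfo):
        # Returning None drops the member, much like rsync's --exclude
        if any(part in exclude for part in Path(member.name).parts):
            return None
        return member
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, filter=skip)

# e.g. backup("/home/gek", "/backups/home.tar.gz", exclude=("clipart",))
```

Full-plus-incremental schemes like the one Gek outlines would layer date-based file selection on top of this; on the command line, GNU tar's --listed-incremental does the same job.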

Thud: Yeah, actually most of the backup systems I've set up were using tar in some form or another, just because it's easy to do. It does mean you have to have at least twice as much disk space, because the original tar archive is going to be about the same size as the system you're backing up, and from there you can compress it, which takes additional disk space. But I think the most important point to make now is that you want to get the backups off of the server.

Copying it over to another hard-drive is good, and in a lot of cases it's really the fastest way to do it, and it's also the fastest way to restore. So if you do have an issue where a file gets overwritten or something, the backup is right there on the drive ready to restore. But for safety's sake you need to get it off the server. Some people use tape. Some people just have a dedicated backup server or file server they just dump it on.

There's a lot of ways you can do that. rsync you can actually do over the network, or you can use FTP, or you can even do it over SSH. But it's important to get it off of the server, because if you have a crash, or your server gets hacked into, or there's a virus or something, it could wipe out everything, including your backups. By having it off of the server there's less chance of an issue with the first server affecting the backups on the second server.

Gek: For disk-to-disk backup, that solution works fine as long as the second disk is a USB disk and when you're done doing the backup you remove it from the system. What Thud is trying to protect you from, aside from just hackers, is if you have some kind of a power surge that takes out your box, you don't want it taking out the backup hard-drive as well.

Thud: Exactly. And USB works well. What about the second part of his question?

Gek: That's a tough one, and it depends. I haven't done desktop support in a long time, and if you're supporting an environment that's going to be Windows machines, I would almost say that you have to have a domain and use group policy, just because that's the best tool for the job that I've ever seen. With a *nix solution---with a Linux solution---it's actually a lot easier. There are a lot of different programs out there; CFEngine is one that I've heard mentioned a lot, though I've actually never used it. And I am working on my own configuration program that will push out configuration changes to servers, but it's been a long time since I dealt with configuring lots of workstations.

Thud: Yeah. I would say I'm pretty much the same way. Most of what I do, day to day, is server-related. For workstations, it really depends on a number of things: what the OS is for the workstations, and the number of workstations, are the two major things.

If they are Windows workstations, there are lots of Windows tools out there for managing them; unfortunately, they all cost money.

If they're Unix workstations, if there's not a large number of them, you can just SSH into the boxes and upload patches and install them and do all of that good stuff all from the command line.

If you go with something commercial---like Red Hat, for example: Red Hat has a subscription service that can take care of doing patches and things like that. There are a lot of things that you can do, but it really depends on what your particular environment is.

Gek: Also with Red Hat, one of the things that would make the life of a SysAdmin easier for doing desktops is Kickstart. I mean, if you set up a Kickstart image, then you can basically Kickstart the workstations 95 percent of the way to where you need them to be. And that helps, so that you're not sitting there, tweaking this or that: you just change your Kickstart image; and, every time you build a machine, it's what your current image is supposed to be.

Thud: Yeah. Kickstart is great. Windows has something similar, called RIS, that allows you to build systems automatically. It's actually kind of funny that you mention it, because I have a friend who emailed me last weekend; and what they've decided to do at their office, which has about 20 machines, is that, every weekend, he has an automated job that Kickstarts all of the workstations back to their default system, because he was having so many problems with users changing all the settings and installing their own software, or whatever, that he just decided it was easier to manage in that way. He got management to approve doing it in that way. And the first Monday after a reinstall wasn't very happy for most of the people in his office, because, suddenly, all of their pictures were gone and all their email was gone, even though he told them "Don't store it there; store it on the server." You know how users can be: they can be quite deaf sometimes.

So we'd like to thank Binary Alcove for his question, and encourage anybody out there who has other questions for us: you can either send it in an email to podcast att, or just send us an MP3 file with your recording and we'll play it on the air. And, if we play it on the air, we'll reward you, over the next few episodes, by trying not to get your name wrong.

All right; so let's jump right into the hardware show and talk about one of the more important things in a server: the CPU. Tell us why you need a decent CPU for a server.

Gek: Well, it kind of depends on what you're trying to do with the box. But, for most servers, you are trying to run an application, whether it's a database, a mail server---or, in my case, I have a fractal-rendering server. You want something that has a really good CPU, because you want it to process whatever is thrown at it and move on to the next thing. You generally don't want your CPU to be at a hundred percent all the time. That's a really good indicator that you need a bigger box. And it's not something that you want to run into a year or two after you put the box in place. Generally, you want to try to make your server so that it's designed to handle whatever your load is going to be for---I usually go for, like, the next three years.

Thud: There are two big manufacturers of processors: Intel and AMD. What's your experience as far as differences go?

Gek: Well, I haven't used AMDs much, but I know a lot of people who do. It used to be that AMDs had a serious overheating problem and the CPUs were kind of fragile. I've heard that that's not the case anymore, and the benchmarks that I've seen for AMD look really, really promising. Price-wise, AMD is much cheaper than Intel. But I have always been an Intel guy, just because that's what I know and that's what I'm comfortable with. How about you? Have you ever used AMD?

Thud: Yeah, there have actually been a few projects in which I've used AMD. I've also had a lot of friends who were into gaming quite a bit for whom, for a long time, everything was AMD, because you got a better price for the punch you were getting from the processor. But again, they were having a lot of problems with heat and keeping the processor cool. I've always been kind of the same way: the way I've looked at it is that AMD is always trying to emulate the Intel processors. AMD did have 64-bit processors out on the market before Intel did, but I've always been an Intel guy. Even for the motherboards: I buy an Intel processor and an Intel motherboard, if I can afford to do it. Everything just seems to run better that way.

OK. One of the things that comes up when we're talking about processors is speed. Is there really any difference between a 2.4-gigahertz and a 3-gigahertz processor for most things, performance-wise?

Gek: In my experience, you don't get a big performance gain from gigahertz or megahertz alone. It seems to matter more when you jump from one family to another, although with the AMDs it's a little more confusing to me, because they haven't done a numbering scheme for their versions: they seem to be doing naming schemes, and that's harder for me to follow if I'm not sitting there reading AMD's page every day. I haven't seen a big difference in going from a 2.4-gigahertz to a 3-gigahertz processor, but I did see a huge jump when I went from, say, Pentium III to Pentium 4. And for what I do with fractals, it's something I can measure in hours: it saves me a lot of time to have a processor that's of a different generation. But the megahertz or gigahertz haven't really had that big of an effect on rendering the fractals.

Thud: What about L2, or Level 2, cache?

Gek: I know that for a lot of things it does speed things up, but I haven't really had a lot of experience with chips that had more than half a meg. Have you ever used anything bigger? I know they come in 1-meg and 2-meg caches; I've never used them.

Thud: Yeah; it actually does help quite a bit, especially if you're doing repetitive tasks, in which you're running, more or less, the same calculation over and over again, because the processor can kind of cache the response: you know, "I ran it ten minutes ago, and this is what the answer was: so there's no point in my running it again; it's the same answer." So, the bigger L2 cache, the more things you can cache and the longer you can cache them for. So it really depends. Gaming, for example, would probably benefit quite a bit from L2, because you're doing the same calculations. If you're in the same room, you're running the same calculations to render all the polygons properly. But, if you're doing fractals, for example, every calculation is going to have a different answer; it's not going to have a chance to cache anything. So it does make a difference, depending on what you're doing.

So, what about multiple processors? Does adding a second processor really make any difference for most tasks?

Gek: If your task is going to be an application or multiple applications, then it usually does. But if what you're doing is really more hard-drive-intensive---if you're just using the server as a file server---multiple CPUs aren't going to be a huge benefit. I haven't actually used multiple CPUs for many of my servers. I think you've done more with multiple-processor systems than I have.

Thud: Yeah; on the high-end systems it comes in really handy for one of the things that you talked about: you can actually have a system that stays up simply because, when one of the processors dies, it can still use the other one. So you have much higher availability with them. And it depends on what you are doing: say you are doing a lot of transactions, like a big mail server that is not just doing mail by itself but is also doing antivirus or antispam, things like that.

Those can all be pretty processor-intensive, and most of those programs, even if the program itself can't handle multiple processors, allow forking off processes. As an example, an email comes in, it processes through the mail server, and then the mail server decides to check it for viruses. It forks off a process for that, and it can do that on one of the available processors. When that comes back and it wants to check it for spam, it can put that off on another processor.

That is really where it comes in handy. If you are doing things that are processor-intensive and can be broken easily into chunks, then multi-processor helps. Otherwise, it does not. Most of the servers that I run for my own stuff are single-processor---it's just cheaper that way. When it does come time to upgrade, I'll probably go with a dual-core processor: it's still one physical processor, but it works like two, because in a year's time that is going to be the default. You are not going to be able to get just a single-core processor anymore.

Gek: I will say, hyper-threading---which makes one CPU look like two CPUs---I was never a big fan of. One of the reasons was, when I was running my fractals, I could see a huge difference with hyper-threading turned on. It was actually slower, because you are making the processor weaker than if you just leave it as a single processor. I do not think hyper-threading is a good idea for servers, generally speaking. There would certainly be situations where it makes sense, but in my experience you do not want to run one CPU as two virtual CPUs. You only want that if you have a dual-core---like you were saying, one physical CPU that has two processors on the same die.

Thud: Yeah, hyper-threading, at least on the server side of the market, was a pretty big disappointment. It did not provide nearly the performance that Intel had expected it to. It is one of the reasons why dual-core finally came around; they are phasing out hyper-threading altogether. Let's go ahead and move on to motherboards. Gek, what do you look for when you are buying a new motherboard for a server?

Gek: Basically, I just ask myself what I am going to use the server for, and most of the time what that means is I am going to go find an entry-level server board from Intel. The base board that I like has video and network onboard. I do not usually do much in the way of RAID, because I am pretty good about backing stuff up, and I do not mind downtime for any of the stuff that I am doing. For customers, that is certainly not the case: for customers I will get RAID, but I do not usually go with RAID on the motherboard; I will get a RAID controller. Thud, what do you look for? What do you consider important features of a motherboard?

Thud: It really depends on what I am doing. As I said earlier, first it has to be made by Intel. As far as my experience goes, Intel has the most stable motherboards. They do not always have the greatest features, and they are not always the fastest motherboards, but I have never had an issue with an Intel motherboard---even doing things like BIOS upgrades. I have had other motherboards that I have killed just doing a BIOS upgrade.

That is the first thing I look for: it has to be made by Intel. Then, from there, it is just down to what I am doing. If I am building a file server, I do not need multiple processors; I go with one processor. I probably will not need a lot of RAM, but I will still look to see what the maximum amount of RAM is; if the maximum is two or four gigs, that is more than enough. If I am doing something that is business-critical and requires a lot of uptime, I am going to look for all the high-availability things. I am going to look for as many processors as I can put in it. I am going to make sure that, even if I cannot afford to buy all the processors---say it is a four-processor box but I can only afford to put in two---at least later on down the road I can upgrade; I can add more power to it.

Same thing for the RAM: I may only need four gigs of RAM now, but it is nice to know that I can upgrade to 16 if I need to, so I look for a board that supports 16. And I am kind of like you: I do not really like onboard controllers. I do not care if the motherboard has SCSI RAID, SATA RAID, whatever. What I am really concerned about is whether it has the number of slots I need to add the cards that I want, because you always do better with add-on cards, besides the fact that it lets me pick exactly the card I want with the features I want. Now that I have said that Intel is my favorite brand, do you have any favorite brands, Gek?

Gek: I usually stick with Intel. I have in the past picked up Asus and Abit boards, and I did not really have any big issues with them. I think I did have a couple of issues with an MSI board that I got, but, just like you said, Intel is pretty reliable. It is easy to work with, and since they make the chips, it makes sense to use their motherboards.

Thud: Yeah, that is kind of my thinking as well. OK, when it comes to memory, it basically depends on what kind of processor and what kind of motherboard you get, and what they require. It is good, once you have picked your processor and motherboard, to see what the motherboard is capable of using as far as memory goes: ECC or non-ECC, or what have you.

But just in general terms, what is the main difference between ECC and non-ECC?

Gek: The way I understand it, ECC is more expensive because there is parity built into it. Where on a regular memory module you may have one of the little memory chips on it go bad, ECC will recover from that, where regular memory may not. That is sometimes where your system lockups come from. Beyond that, I have only worked with ECC on very few servers. Do you have experience with ECC?

Thud: The possibility of having a memory error and then crashing the server is not really outweighed that much by the expense of ECC. There are a few situations---like if we have one big file server---where it makes sense to spend the extra money to get ECC, but for the most part it really does not matter. It is not worth the extra spend.

Gek: Do you have a favorite brand of memory that you use?

Thud: For the most part, I go with Crucial. On the high-end side---like when I spend the money on ECC---there are a couple of other companies, and it really depends on who has the cheapest price at the time, but I have always had really good luck with Crucial. The thing I like about Crucial is that all memory is rated for a particular speed, and for the last several years you have also had to worry about the heat coming off of the memory adding to the general heat within the case. I have always found that Crucial modules actually run faster than the speed they are rated for, and the heat is always much lower than most of the other memory manufacturers'.

Gek: Yeah, I have used Crucial for the past four or five years, I think, and I have never had one of their chips go bad on me before I replaced it, or upgraded or replaced the machine. I have had really good luck with them.

When you're building a server, Thud, the next thing you're really going to think about is hard drives. Do you usually choose SCSI or SATA?

Thud: It really depends on what I'm doing with a particular server. It used to depend on whether or not I was going to use RAID. The SCSI RAID cards have always been, well, until recently, much better performance- and feature-wise than the SATA RAID cards. They've also been a whole lot more expensive. In the last couple of years I've found a really good RAID card manufacturer for SATA drives---3Ware. I really love their RAID cards; I don't see any reason anymore to go SCSI unless you're going extreme high-end. SCSI is still more expensive: if you're talking about an 80 GB drive, there's not a whole lot of price difference, but a 146 GB or a 300 GB drive in SCSI is much, much more expensive than the same size drive in SATA. But other than that, from a technical standpoint there isn't a whole lot of difference.

Gek: What about speed? I know, when you're talking about speed on a hard drive, there are a few different things that matter: the rpms, and then the throughput on your controller, whether it's SCSI or SATA. What do you usually choose? How do you go about determining what kind of speed you need, and what are the differences?

Thud: What it comes down to is where the hard drive is going to sit in the operating system. Especially on servers, I tend to choose fast hard drives for the boot drive: when you have to reboot the server, it makes the process a whole lot faster, because it can pull the data off the drive a lot faster. For the SATA drives, for example, I tend to use the Western Digital Raptors, which are 10,000 rpm. For data drives it doesn't really matter that much, because generally, by the time the OS is loaded, the drives and the system can cache, so pulling data off a data drive isn't nearly as important.

It also saves some money: the faster drives are more expensive, and there are also size limits. I think the biggest 10,000 rpm SATA drive that I know of is only 150 GB. But Seagate has 750 GB SATA drives now. They are not 10,000 rpm, but they can hold a lot of storage, and in certain environments that's what you're after: you want more space over more speed.

It just depends on what you're doing. For most of the hardware you have to cater what you're designing to what your end goal is. You don't want to just have a standard that you use on everything. You'll end up wasting a lot of money that way.

OK, let's talk about troubleshooting hard drives. Gek, do you have any tips for troubleshooting hard drive problems?

Gek: When it comes to hard drives, it can be really tricky. A lot of the problems manifest the same symptoms as other things, like a bad CPU or bad memory. Really, you have to look for files getting corrupted. Sometimes you'll know---it'll be obvious, because you'll hear the little heads on the hard drive clunking around, and then you know it's just time to toss it. But I have not had a lot of luck with software tools in trying to do diagnostics on hard drives. Have you ever had luck with dd or anything helping you diagnose a bad drive?

Thud: Well, it really depends. On SCSI drives, the ability to detect hard-drive errors is usually built into the hardware of the SCSI subsystem, which makes it a lot easier to figure out whether it is a hard-drive problem or something else. On SATA drives it's a little bit more difficult, because not all SATA drives support SMART and not all controllers fully support SMART. But generally, I'm troubleshooting stuff on SCSI drives. If I have a SATA drive problem, or I even think a SATA drive is causing a problem, I just replace it. I generally don't get the large sizes, so they're not that expensive.

Gek: Yeah, I haven't done much with SATA; most of my machines are still IDE, that's the way I go. IDE drives are so cheap that if I even suspect one is going bad, I just shelve it and get a new one.

Gek: All right, do you have any favorite brands for hard drives?

Thud: I have always had really good luck with Maxtor. I don't know, I think that hard drives are kind of one of those where everybody's got a favorite and nobody agrees, that has been my experience. But I'm a big fan of Maxtor, how about you?

Gek: Yeah, I've actually been a Maxtor fan for quite a while. There was a time where I would have basically refused to use Seagate, but over the past couple of years, I think most of the manufacturers are really about the same. They have about the same failure rate, no matter which brand you use. What it really comes down to is, if you want a drive with the latest and greatest technology, like Seagate right now I think is the only manufacturer for 750 GB SATA drives. So if you want a 750, you want to go with Seagate. But as the other manufacturers catch up, for the most part, they're all going to be the same.

Thud: OK, now let's talk about RAID. Gek, what's RAID?

Gek: RAID is "Redundant Array of Independent Disks"---or "Drives," I don't remember which. It's a set of standards for either keeping your data redundant, or striping drives together so that they look like one larger drive to the operating system. Or parity: you can use it for parity on a stripe as well.

Thud: Why don't you tell us the different RAID numbers and what they mean?

Gek: Actually, before I do that, let me go over a little bit of why you might want to use RAID, because the numbers don't make sense without that. The main thing to understand is that RAID is not a form of backup or anything like that; it just protects against a drive failure. As we all know, hard drives fail, and the idea is that if you're running a server, especially if you're making money with it, it's much better to have as much uptime on that server as you can manage. RAID is one of the ways you can do that. Just from being a system admin, it's much more convenient when a drive fails that your system stays up and running: it's not a fire that you have to take care of right now; you can replace the drive when you get to it over the next hour or two. That's the main benefit of RAID: just the protection from drive loss.

At the different levels of RAID, you have protection against just one drive loss, or against a few drives that you can lose and still keep the system up and running. Now, for the RAID levels: the basic ones that, as far as I know, all hardware RAID controllers support are 0 and 1.

RAID 0 is just striping, and it's basically like gluing two hard drives together. So if you have two 100 GB drives, when your system looks at it, it will look like one 200 GB drive. It doesn't give you any protection at all; the only reason why you would ever really want to use it is that it makes reads and writes quite a bit faster. As an example, if you're writing a 100 MB file, instead of having one drive trying to write the entire file to the disk, you now have two drives, and each one writes half of the bits, so you can write it twice as fast. The other one, RAID 1, is mirroring. In that case, you have two drives that are exact copies of each other at the bit level. So if you have two 100 GB drives, your system only sees one drive. The benefit there is you can lose one drive and still have your system up and running. Most hardware RAID controllers now support hot plugging, if your hard drives support it, so you can actually replace the drive, rebuild the mirror, and never have to take your system down.

There are a number of other RAID levels, a lot of them proprietary, but the other two major ones are RAID 5 and RAID 10. RAID 5 you can think of as a combination of the benefits of 1 and 0. First of all, it requires at least three drives. In a three-drive configuration, a third of each drive is used only for parity, so if you lose a drive, you can access the data, or rebuild that drive, from information stored on the remaining drives that is a copy of what was on the drive that failed. In the case of 100 GB drives, if you have three of them, you have 300 GB of total storage, but your system will see 200 GB, because you are only getting two-thirds of each of the drives. The way that RAID 5 works, even if you have five drives, or 10 drives, you are always going to give up one drive's worth of space to the parity that is spread across each of the drives.
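The parity Gek is describing is, at its core, an XOR across the data drives: the parity block is the XOR of the data blocks, so any single lost drive can be rebuilt from the survivors. A toy sketch with one stripe across three "drives" (the byte strings are made up for illustration):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# One RAID 5 stripe across three drives: two data blocks, one parity block
drive0 = b"hello.db"
drive1 = b"world.db"
parity = xor_bytes(drive0, drive1)   # written to the third drive

# drive1 fails: XOR-ing the survivors reproduces its contents exactly
rebuilt = xor_bytes(drive0, parity)
assert rebuilt == drive1
```

With more drives, the same property holds because XOR is associative: the parity of all the data blocks lets you recover whichever single block is missing.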

Again, the benefit there is you can lose a drive, and you can also smash the drives together so you get a lot more storage you can use. The other one---which, of all the ones that are useful, is probably the most expensive---is RAID 10. The reason why it is expensive is that, like RAID 5, you're going to lose some of the usable drive space; in the case of RAID 10, you're going to lose 50 percent of it. You have to have an even number of drives, and it is a perfect combination of RAID 0 and RAID 1. If you have six drives, for example, each pair is mirrored, so you end up basically with three mirrors; and then, where the RAID 0 comes in, it stripes all three of those mirrors together.

If you have six 100 GB drives, you only have 300 GB of usable space. The big benefit there is that disk access is actually faster than RAID 5, and it protects against the loss of more drives: depending on which drives they are, you could lose 50 percent of your hard drives and the system would still be up and running, and you would still have access to all the data.
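The capacity arithmetic in the last few paragraphs can be summarized in one small function. This is a sketch assuming equal-sized drives; real controllers lose a little extra to metadata:

```python
def usable_gb(level: str, drives: int, size_gb: int) -> int:
    """Usable space for `drives` equal drives of `size_gb` GB at a given RAID level."""
    if level == "0":                    # striping: every byte is usable
        return drives * size_gb
    if level == "1":                    # mirroring: a pair holds one drive's worth
        return size_gb
    if level == "5":                    # one drive's worth lost to spread-out parity
        return (drives - 1) * size_gb
    if level == "10":                   # mirrored pairs, then striped: half the raw space
        return (drives // 2) * size_gb
    raise ValueError(f"unsupported RAID level: {level!r}")

print(usable_gb("5", 3, 100))    # three 100 GB drives in RAID 5 -> 200
print(usable_gb("10", 6, 100))   # six 100 GB drives in RAID 10 -> 300
```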

Depending on what controller it is---because there are different ways to combine the RAID 1 and RAID 0---RAID 10 is also known as 1+0 or 0+1. It really depends on which part happens first, the mirroring or the striping. Gek, when it comes to SCSI or SATA RAID controllers, do you have any particular brands that you like?

Gek: I haven't really done much with SATA RAID controllers but for SCSI, I've always been a big fan of Adaptec. I mean, they're a little more expensive, especially when you get into the really, really nice Adaptec cards, but I've never had one go bad in such a way that it totally ruined the drives that were attached to it. I've had really good luck with them, where if the controller goes bad, even if it takes out a drive with it, I can take the same make and model card, pop it in, and it'll work and I can recover the data off the drives.

Thud: I've actually been burned by Adaptec a couple of times on SCSI stuff, so much so that I try to avoid them if I can. Right now my favorite SCSI RAID cards are the ones built on an LSI chipset; there are a number of manufacturers that make them.

The nice thing is that the operating system vendors basically only need one driver. Say there are ten different models of RAID card; the one driver works across all of them. With the more expensive cards you of course get more features, but you still have just one driver to deal with. They also have some really good utilities for Linux, so you can manage the RAID card from the command line with the system up and running.

On the SATA side of the house, I've always been a big fan of 3Ware. 3Ware is the only manufacturer out there focused purely on SATA RAID cards. Adaptec has some, Promise makes some, but I've always felt 3Ware is much better. I've always gotten better performance, and I've never had an issue with 3Ware cards, plus they now make cards that you can plug up to 16 drives into.

That's just amazing to me. If you need a file server, you go out and get 16 750 GB drives, plug them into a 3Ware card, and you're off and running, and you have just ridiculous amounts of storage.

Thud: OK, let's move on to network hardware. Since this is on the server side, I guess we're just talking about network cards. Gek, what can you tell us about network cards?

Gek: I've always been a fan of Intel cards. They're not the cheapest, but they definitely work. With network cards, you have to consider what kind of speed you're going to need for the server, and generally these days that means either 100 megabit or gigabit. In my mind, if you can afford gigabit, you should get it. There's every reason to expect you'll be using gigabit within the next few years. As the Internet grows and the technology grows, people are just going to expect you to have that kind of bandwidth, so you might as well plan for it.

Thud: Yeah, I'd have to agree on the Intel cards, especially some of the higher-end ones, and by higher end I'm talking about a $30 card, or at most a $40 card. They have onboard processors that can do some of the TCP/IP checksums. Normally your OS would take care of that, but if you can offload it onto the network card, why not do it? It makes perfect sense to do it there, and it frees up CPU cycles on your system for other work.
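To make the checksum offload concrete: the sums in question are the 16-bit ones' complement checksums from RFC 1071 that cover IP, TCP, and UDP. Here is a minimal sketch of the calculation, purely for illustration; a card with offload does this same arithmetic in silicon so the CPU doesn't have to:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones' complement checksum -- the work a NIC offloads."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Worked example from RFC 1071: words 0001 f203 f4f5 f6f7 -> checksum 220d
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # 0x220d
```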

So I've always gotten really good performance out of Intel cards and I've never had one go bad. I also agree on the gigabit. You might as well go ahead and get a gigabit, even if you're not using it in your file server now or your mail server right now, it is definitely something you will want to look at down the road and, for the most part, you can take your network card and keep it with your upgrade. So if you replace your motherboard or you upgrade to a completely new system, you can take your PCI network card with you and you get much more value out of it that way.

One of the things to look at when you are designing a server is the size, especially if you are going to have it hosted somewhere; they are going to charge you based on the size of your server, so you want it as small as possible. Right now the standard is 1U. The problem with 1U servers, which are roughly an inch and three-quarters tall, is that you don't have much space for PCI cards. At most you might be able to fit two cards in, but if you need a lot of interfaces, like if you're building a firewall, or you want a back-end file server for your web servers or mail servers, you need a lot more ports. Just about every card manufacturer out there at least makes dual-port network cards, which are basically two network cards smushed together onto one card in a single PCI slot. And on the higher end, some manufacturers even have quad cards with four usable network ports in one PCI slot. There's some performance degradation there, but not enough to be noticeable in most real-world environments.

Gek: One of the other things you can consider with network cards is...what I like to do, and I have done this everywhere I've worked, is use one interface exclusively for maintenance, backups, and monitoring, and that's it, nothing else. Then I use another interface that actually talks either to the public, out on the Internet, or to the other servers. So I have one network that's just for my servers and another that I use as my maintenance network. I like doing that because if I'm doing backups, I don't want the backups fighting users for bandwidth on the same network card. I want the backups to have as much bandwidth as they can get.

Thud: I would have to agree with that. I've even heard of some places that have dedicated backup networks, so you would have a management interface, a public interface, and then a dedicated backup interface. And that ties back to what we were saying about gig network cards: the backup interface is the number one place to use one. You want those backups to be able to push as much bandwidth as possible so the backup finishes as quickly as it can. Now for this episode's moment of sac: physical security. Gek, take it away.
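To put rough numbers on why the backup interface wants gigabit, here is some back-of-the-envelope math. The 80% effective-throughput figure is an assumption for the sake of the example, not a measurement:

```python
def transfer_hours(data_gb, link_mbit, efficiency=0.8):
    """Rough time to push data_gb over a link at a given line rate.

    Assumes decimal gigabytes and ~80% effective throughput by default.
    """
    megabits = data_gb * 8 * 1000             # GB -> megabits
    seconds = megabits / (link_mbit * efficiency)
    return seconds / 3600

print(round(transfer_hours(100, 100), 1))     # 100 GB over 100 Mbit: ~2.8 hours
print(round(transfer_hours(100, 1000), 1))    # 100 GB over gigabit: ~0.3 hours
```

Shrinking a backup window from roughly three hours to under twenty minutes is the whole argument for putting the gig card on the backup network.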

Gek: OK. One of the things that you really do want to pay attention to is the physical condition of your box. You need to be aware of what is typically connected to it and what isn't. One of the concerns people have is key-loggers. Key-loggers are dirt cheap, and they are not always obvious. There's a hundred-dollar key-logger that comes with some tape, so you can tape up the end of the cable where your keyboard plugs into the computer. You attach the key-logger, tape it up, and you can make it look like it's supposed to be there, like there's no key-logger, nothing wrong. And if somebody puts that on your computer, walks away, and comes back two weeks later, it's recorded all of your keystrokes for the past two weeks. They've got who knows what: user names, passwords, credit card numbers, social security numbers, letters that you wrote to friends or family, maybe financial documents. It depends on what you've been typing. That's something you want to watch out for.

Obviously you can't look behind your computer every time you go to use it, but if you take care of your machine, and you position it so it would be hard to attach something like that unnoticed, that's something you want to look into. And one of the things people have developed for key-loggers and other physical security concerns are cages, where you can basically lock a PC into a box, or just a cage, with the wires coming out the back, and protect the system pretty well. There are always going to be ways around that; you can certainly do things outside the box that will let you capture data coming off it across the network.

So when it comes to physical security, I always sit down and ask, "How much do I care about what's on this box? Is it something I value enough that I need to make sure nobody can get to it, or would it just be inconvenient if somebody accessed the data, but it's not going to kill me?" I do think you have to give it some thought. It really is something people overlook, and there's a reason the hosting companies that provide physical security charge astronomical rates: it's not something that's easy to accomplish, and if you can do it, it's valuable to a lot of people. What do you usually do for physical security?

Thud: Well, I pretty much do the same thing. I try and figure out what kind of data is going to be on the box, and how I'm going to be accessing it. But one thing that I'm always sure of, is that I can trust where I'm going to be physically locating the box. Whether it's in my server closet, or it's at a data center, I have to be able to trust the people that are there. You know, they need access so they can help me remotely; if I have a box die I need them to cable it up, try rebooting it, I need them to have access to it. But I also need to be able to have a certain amount of trust that they're not going to take my box down and copy all the hard-drives, or keep all that data, or put all that data up on eBay or whatever. So, you have to think quite a bit about physical security.

You can get lockable cases, so that when you're sending your server out to a data center they can't get at the hard-drives without actually breaking the case. But again, if somebody has physical access to your box, they can do anything they want with it; they can own that box and all of the data on it. For most of us, either we can't afford a facility that really cares that much about physical security, or we're just renting a server and don't control the hardware at all. But it is definitely something you want to think about, because it is often overlooked, and people have been bitten by it.

There was one case I read about where a guy was running an IRC server, and one of the users was upset because he kept getting banned no matter what IP he came from--of course he was stupid and always used the same username, but that's a different story. He tracked down which data center the IRC server was in, arranged a tour, was able to spot the server's name actually labeled on the machine, took out a bottle of water, and sprayed it into the CD drive, taking the server down for quite a few hours while they tried to dry it out. Things like that can happen, so you have to think about what you can do to prevent them. Really, the most anybody can do these days is put the box in a data center where you feel comfortable that the staff are actually going to be monitoring any other customers who physically have access to where your box is stored.


Thud: For show notes, or other details, please visit our website at If you would like to send us feedback, or have a question you would like us to answer on the show, please send an email to podcast att The intro music, I Like Caffeine, is by Tom Cody. This song, Down the Road, is by Rob Cosell. Please visit our website for links to their websites.


Thud: This podcast is covered under a Creative Commons license. Please visit our website for more details.



Reviewer: CivilEngine - favoritefavoritefavoritefavorite - March 11, 2013
Subject: Network Problem
What way an user as me able to Resolving and Validate own Problem Uploading Files in by Run own Serves Security System Software Storage Smart SYSTEM Safety?
SIMILAR ITEMS (based on metadata)
Run Your Own Server Podcast (2006-2008)
eye 172
favorite 0
comment 0
favoritefavoritefavoritefavorite ( 1 reviews )